VER: Learning Natural Language Representations for Verbalizing Entities and Relations
Entities and relationships between entities are vital in the real world.
Essentially, we understand the world by understanding entities and relations.
For instance, to understand a field, e.g., computer science, we need to
understand the relevant concepts, e.g., machine learning, and the relationships
between concepts, e.g., machine learning and artificial intelligence. To
understand a person, we should first know who they are and how they are
related to others. To understand entities and relations, humans may refer to
natural language descriptions. For instance, when learning a new scientific
term, people usually start by reading its definition in dictionaries or
encyclopedias. To know the relationship between two entities, humans tend to
create a sentence to connect them. In this paper, we propose VER: A Unified
Model for Verbalizing Entities and Relations. Specifically, we attempt to build
a system that takes any entity or entity set as input and generates a sentence
representing those entities and relations, which we call a "natural language
representation".
Extensive experiments demonstrate that our model can generate high-quality
sentences describing entities and entity relationships and facilitate various
tasks on entities and relations, including definition modeling, relation
modeling, and generative commonsense reasoning.
Citation: A Key to Building Responsible and Accountable Large Language Models
Large Language Models (LLMs) bring transformative benefits alongside unique
challenges, including intellectual property (IP) and ethical concerns. This
position paper explores a novel angle to mitigate these risks, drawing
parallels between LLMs and established web systems. We identify "citation" -
the acknowledgement or reference to a source or evidence - as a crucial yet
missing component in LLMs. Incorporating citation could enhance content
transparency and verifiability, thereby confronting the IP and ethical issues
in the deployment of LLMs. We further propose that a comprehensive citation
mechanism for LLMs should account for both non-parametric and parametric
content. Despite the complexity of implementing such a citation mechanism,
along with the potential pitfalls, we advocate for its development. Building on
this foundation, we outline several research problems in this area, aiming to
guide future explorations towards building more responsible and accountable
LLMs.